Empirical Bayes and the James–Stein Estimator
Author
Abstract
Charles Stein shocked the statistical world in 1955 with his proof that maximum likelihood estimation methods for Gaussian models, in common use for more than a century, were inadmissible beyond simple one- or two-dimensional situations. These methods are still in use, for good reasons, but Stein-type estimators have pointed the way toward a radically different empirical Bayes approach to high-dimensional statistical inference. We will be using empirical Bayes ideas for estimation, testing, and prediction, beginning here with their path-breaking appearance in the James–Stein formulation. Although the connection was not immediately recognized, Stein’s work was half of an energetic post-war empirical Bayes initiative. The other half, explicitly named “empirical Bayes” by its principal developer Herbert Robbins, was less shocking but more general in scope, aiming to show how frequentists could achieve full Bayesian efficiency in large-scale parallel studies. Large-scale parallel studies were rare in the 1950s, however, and Robbins’ theory did not have the applied impact of Stein’s shrinkage estimators, which are useful in much smaller data sets. All of this has changed in the 21st century. New scientific technologies, epitomized by the microarray, routinely produce studies of thousands of parallel cases (we will see several such studies in what follows) that are well suited to the Robbins point of view. That view predominates in the succeeding chapters, though Robbins’ methodology is not invoked explicitly until the very last section of the book. Stein’s theory concerns estimation, whereas the Robbins branch of empirical Bayes allows for hypothesis testing, that is, for situations where many or most of the true effects pile up at a specific point, usually called 0. Chapter 2 takes up large-scale hypothesis testing, where we will see, in Section 2.6, that the two branches are intertwined. Empirical Bayes theory blurs the distinction between estimation and testing, as well as between frequentist and Bayesian inference.
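To make the shrinkage idea above concrete, here is a minimal sketch (not part of the original abstract) of a James–Stein estimator that shrinks each observed value toward the grand mean; the unit variance, the simulated effects, and all variable names are illustrative assumptions, not details taken from the source.

import numpy as np

def james_stein(z, sigma2=1.0):
    # James-Stein estimate: shrink each observation z_i ~ N(mu_i, sigma2)
    # toward the grand mean by the estimated factor 1 - (N - 3) * sigma2 / S.
    z = np.asarray(z, dtype=float)
    N = z.size
    zbar = z.mean()
    S = np.sum((z - zbar) ** 2)          # total squared deviation from the grand mean
    shrink = 1.0 - (N - 3) * sigma2 / S  # empirical Bayes shrinkage factor
    return zbar + shrink * (z - zbar)

# Illustration on simulated "parallel cases": the shrunken estimates typically
# incur smaller total squared error than the raw maximum likelihood estimates.
rng = np.random.default_rng(0)
mu = rng.normal(0.0, 2.0, size=1000)     # unknown true effects (assumed for illustration)
z = rng.normal(mu, 1.0)                  # one noisy observation per case
print("MLE risk:        ", np.mean((z - mu) ** 2))
print("James-Stein risk:", np.mean((james_stein(z) - mu) ** 2))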
Similar resources
Reversing the Stein Effect
The Reverse Stein Effect is identified and illustrated: A statistician who shrinks his/her data toward a point chosen without reliable knowledge about the underlying value of the parameter to be estimated, but based instead upon the observed data, will not be protected by the minimax property of shrinkage estimators such as that of James and Stein, but instead will likely incur a greater error th...
Limiting Properties of Empirical Bayes Estimators in a Two-Factor Experiment under Inverse Gaussian Model
The empirical Bayes estimators of treatment effects in a factorial experiment were derived and their asymptotic properties were explored. It was shown that they were asymptotically optimal and that the estimator of the scale parameter had a limiting gamma distribution, while the estimators of the factor effects had a limiting multivariate normal distribution. A bootstrap analysis was performed to ill...
Approximating Bayesian inference by weighted likelihood
The author proposes to use weighted likelihood to approximate Bayesian inference when no external or prior information is available. He proposes a weighted likelihood estimator that minimizes the empirical Bayes risk under relative entropy loss. He discusses connections among the weighted likelihood, empirical Bayes and James–Stein estimators. Both simulated and real data sets are used for illu...
Empirical Likelihood Based Posterior Expectation: from nonparametric posterior means via double empirical Bayesian estimators to nonparametric versions of the James-Stein estimator
Posterior expectation is a well-accepted method for data analysis via Bayesian inference based on parametric likelihoods. In this paper we propose utilizing empirical likelihood (EL) methodology to develop novel nonparametric posterior expectation. The parametric Bayesian methodology contains the empirical Bayes approach for the purpose of using the observed data to estimate parameters, or even...